A Appendix Organization

Neural Information Processing Systems

This appendix is organized as follows: in Sections B, C, and D we provide the missing proofs of Theorems 3, 4, and 5. In Section E we provide detailed versions of Theorems 6 and 7 containing all constants. In Section F we provide a version of Theorem 2 with all constants, for completeness.

In this section we provide the missing proof of Theorem 3, restated below: Lemma 3. Let W be a real vector space and … So now it remains only to show the regret bound. We recall the following consequence of the concavity of the square-root function (see Auer et al. [2002], Duchi et al. [2010] for proofs): for any sequence of non-negative numbers x …

In this section we provide the missing proof of Theorem 4, restated below: Lemma 4. … Now it remains to use the regret bound on A. Observe that |s …

In this section, we provide the missing proof of Theorem 5, restated below: Theorem 5.
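The square-root concavity lemma cited above (Auer et al. [2002], Duchi et al. [2010]) states that for any sequence of non-negative numbers, the sum of each term divided by the square root of its running total is at most twice the square root of the full sum. As a sanity check, the following sketch verifies the inequality numerically on a random sequence; the function name `adaptive_sum` is ours, not from the paper:

```python
import math
import random

def adaptive_sum(xs):
    """Left-hand side of the lemma: sum of x_t / sqrt(x_1 + ... + x_t)."""
    total, lhs = 0.0, 0.0
    for x in xs:
        total += x
        if total > 0:
            lhs += x / math.sqrt(total)
    return lhs

random.seed(0)
xs = [random.random() for _ in range(1000)]
lhs = adaptive_sum(xs)
rhs = 2 * math.sqrt(sum(xs))  # right-hand side: 2 * sqrt(x_1 + ... + x_T)
assert lhs <= rhs
```

The bound is what makes adaptive step sizes of the form 1/sqrt(cumulative sum) yield O(sqrt(T))-style regret guarantees.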



Let Z be the set of all indices ℓ ∈ [n] such that α[ℓ] ≠ 0. Let Z̄ ⊆ Z be the set of all indices ℓ ∈ Z such that the ℓ-th variable is constrained to be integral.
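These two index sets can be computed directly from the coefficient vector and a mask of integrality constraints. The sketch below assumes α is a list of coefficients and `integral_mask` is a boolean list of the same length; both names are ours:

```python
def support_indices(alpha, integral_mask):
    """Return (Z, Z_bar): indices with nonzero coefficient, and the
    subset of those whose variables are constrained to be integral."""
    Z = [i for i, a in enumerate(alpha) if a != 0]
    Z_bar = [i for i in Z if integral_mask[i]]
    return Z, Z_bar

# Example: variables 0 and 2 have nonzero coefficients; only 2 is integral.
Z, Z_bar = support_indices([3.0, 0.0, -1.0, 0.0], [False, False, True, True])
```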


Quantum Embedding of Knowledge for Reasoning

Neural Information Processing Systems

Statistical Relational Learning (SRL) methods are the most widely used techniques for generating distributional representations of symbolic Knowledge Bases (KBs). These methods embed a given KB into a vector space by exploiting statistical similarities among its entities and predicates, but without any guarantee of preserving the underlying logical structure of the KB. This, in turn, results in poor performance on logical reasoning tasks solved using such distributional representations. We present a novel approach called Embed2Reason (E2R) that embeds a symbolic KB into a vector space in a logical-structure-preserving manner. The approach is inspired by the theory of Quantum Logic. Such an embedding allows answering membership-based complex logical reasoning queries with impressive accuracy improvements over popular SRL baselines.
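In quantum logic, propositions correspond to subspaces of a vector space, and an entity satisfies a proposition when its vector lies in the corresponding subspace. The abstract does not give E2R's actual formulation, so the following is only a toy illustration of that general idea: concepts are subspaces, entities are vectors, and membership is tested by orthogonal projection (all names here are hypothetical):

```python
import numpy as np

def project(v, basis):
    # Orthogonal projection of v onto the column span of `basis`.
    Q, _ = np.linalg.qr(basis)
    return Q @ (Q.T @ v)

def is_member(entity, concept_basis, tol=1e-6):
    # Quantum-logic-style membership: the entity vector lies
    # (approximately) inside the concept's subspace.
    residual = entity - project(entity, concept_basis)
    return bool(np.linalg.norm(residual) < tol)

d = 4
mammal = np.eye(d)[:, :2]              # toy concept: span of e1 and e2
dog = np.array([0.6, 0.8, 0.0, 0.0])   # entity inside the subspace
rock = np.array([0.0, 0.0, 1.0, 0.0])  # entity outside the subspace
```

Here a membership query such as "is dog a mammal?" reduces to a projection-residual test, which is the kind of geometric reasoning a logical-structure-preserving embedding makes possible.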